Independent component analysis (ICA) is a computational method for separating a multivariate signal into additive subcomponents, assuming the mutual statistical independence of the non-Gaussian source signals. It is a special case of blind source separation.
When the independence assumption is correct, blind ICA separation of a mixed signal gives very good results. ICA is also applied, for analysis purposes, to signals that are not supposed to have been generated by mixing. A simple application of ICA is the "cocktail party problem", where the underlying speech signals are separated from sample data consisting of people talking simultaneously in a room. Usually the problem is simplified by assuming no time delays or echoes. An important note is that if N sources are present, at least N observations (e.g. microphones) are needed to recover the original signals; for example, three speakers require at least three microphones. This constitutes the square case (J = D, where D is the input dimension of the data and J is the dimension of the model). The overdetermined (J < D) and underdetermined (J > D) cases have also been investigated.
ICA finds the independent components (also known as factors, latent variables, or sources) by maximizing the statistical independence of the estimated components. We may choose one of many ways to define independence, and this choice governs the form of the ICA algorithms. The two broadest definitions of independence for ICA are
1) Minimization of Mutual Information
2) Maximization of non-Gaussianity
The minimization-of-mutual-information (MMI) family of ICA algorithms uses measures such as the Kullback–Leibler divergence and maximum entropy. The non-Gaussianity family of ICA algorithms, motivated by the central limit theorem, uses kurtosis and negentropy.
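As an illustration of the non-Gaussianity measures just mentioned, the sketch below (a minimal example, assuming NumPy and standardized signals; the function names are illustrative) computes the excess kurtosis and a common log-cosh approximation of negentropy. Both are near zero for a Gaussian signal and larger for a super-Gaussian (e.g. Laplacian) signal.

```python
import numpy as np

def excess_kurtosis(s):
    """Fourth standardized moment minus 3; approximately zero for a Gaussian signal."""
    s = (s - s.mean()) / s.std()
    return np.mean(s ** 4) - 3.0

def negentropy_logcosh(s, n_gauss=100_000, seed=0):
    """Approximate negentropy as (E[G(s)] - E[G(nu)])^2 with G(u) = log cosh(u),
    where nu is a standard Gaussian reference variable."""
    s = (s - s.mean()) / s.std()
    nu = np.random.default_rng(seed).standard_normal(n_gauss)
    return (np.mean(np.log(np.cosh(s))) - np.mean(np.log(np.cosh(nu)))) ** 2

# A Laplacian (super-Gaussian) source scores higher than a Gaussian one on both measures.
rng = np.random.default_rng(1)
laplace, gauss = rng.laplace(size=10_000), rng.standard_normal(10_000)
print(excess_kurtosis(laplace), excess_kurtosis(gauss))
print(negentropy_logcosh(laplace), negentropy_logcosh(gauss))
```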
Typical algorithms for ICA use centering, whitening (usually with the eigenvalue decomposition), and dimensionality reduction as preprocessing steps to simplify and reduce the complexity of the problem for the actual iterative algorithm. Whitening and dimensionality reduction can be achieved with principal component analysis or singular value decomposition. Whitening ensures that all dimensions are treated equally a priori before the algorithm is run. Well-known algorithms for ICA include infomax, FastICA, and JADE, among many others.
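The following is a minimal sketch of the centering and whitening steps described above, assuming the data matrix holds one observed signal per row and using only NumPy (the helper name is illustrative):

```python
import numpy as np

def center_and_whiten(X):
    """Center each signal (row of X) and whiten with the eigenvalue decomposition
    of the covariance matrix, so the whitened signals are uncorrelated with unit variance."""
    X = X - X.mean(axis=1, keepdims=True)       # centering: remove each signal's mean
    cov = np.cov(X)                             # covariance matrix of the signals
    eigvals, eigvecs = np.linalg.eigh(cov)      # eigenvalue decomposition (symmetric matrix)
    V = np.diag(eigvals ** -0.5) @ eigvecs.T    # whitening matrix D^(-1/2) E^T
    # Dimensionality reduction would keep only the leading eigenvectors here.
    return V @ X, V

# After whitening, the covariance of the transformed data is (approximately) the identity.
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 1000)) * np.array([[1.0], [5.0], [0.2]])
Z, V = center_and_whiten(X)
print(np.round(np.cov(Z), 2))
```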
In general, ICA cannot identify the actual number of source signals, a uniquely correct ordering of the source signals, nor the proper scaling (including sign) of the source signals.
ICA is important to blind signal separation and has many practical applications. It is closely related to (or even a special case of) the search for a factorial code of the data, i.e., a new vector-valued representation of each data vector such that it gets uniquely encoded by the resulting code vector (loss-free coding), but the code components are statistically independent.
Linear independent component analysis can be divided into noiseless and noisy cases, where noiseless ICA is a special case of noisy ICA. Nonlinear ICA should be considered as a separate case.
The data are represented by the observed random vector $\boldsymbol{x} = (x_1, \ldots, x_m)^T$ and the components as the random vector $\boldsymbol{s} = (s_1, \ldots, s_n)^T$. The task is to transform the observed data $\boldsymbol{x}$, using a linear static transformation $\boldsymbol{W}$ as
$\boldsymbol{s} = \boldsymbol{W} \boldsymbol{x},$
into maximally independent components $\boldsymbol{s}$ measured by some function $F(s_1, \ldots, s_n)$ of independence.
The components $x_i$ of the observed random vector $\boldsymbol{x} = (x_1, \ldots, x_m)^T$ are generated as a sum of the independent components $s_k$, $k = 1, \ldots, n$:
$x_i = a_{i,1} s_1 + \cdots + a_{i,k} s_k + \cdots + a_{i,n} s_n,$
weighted by the mixing weights $a_{i,k}$.
The same generative model can be written in vectorial form as $\boldsymbol{x} = \sum_{k=1}^{n} s_k \boldsymbol{a}_k$, where the observed random vector $\boldsymbol{x}$ is represented by the basis vectors $\boldsymbol{a}_k = (a_{1,k}, \ldots, a_{m,k})^T$. The basis vectors $\boldsymbol{a}_k$ form the columns of the mixing matrix $\boldsymbol{A} = (\boldsymbol{a}_1, \ldots, \boldsymbol{a}_n)$ and the generative formula can be written as $\boldsymbol{x} = \boldsymbol{A} \boldsymbol{s}$, where $\boldsymbol{s} = (s_1, \ldots, s_n)^T$.
Given the model and realizations (samples) $\boldsymbol{x}_1, \ldots, \boldsymbol{x}_N$ of the random vector $\boldsymbol{x}$, the task is to estimate both the mixing matrix $\boldsymbol{A}$ and the sources $\boldsymbol{s}$. This is done by adaptively calculating the $\boldsymbol{w}$ vectors and setting up a cost function which either maximizes the non-Gaussianity of the calculated $s_k = \boldsymbol{w}^T \boldsymbol{x}$ or minimizes the mutual information. In some cases, a priori knowledge of the probability distributions of the sources can be used in the cost function.
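As an illustration of adaptively estimating one weight vector $\boldsymbol{w}$ by maximizing non-Gaussianity, here is a sketch of the well-known one-unit FastICA fixed-point update with the contrast function $G(u) = \log \cosh u$ (so $g = \tanh$). It assumes the data have already been centered and whitened as in the sketch above; the function name and parameters are illustrative.

```python
import numpy as np

def fastica_one_unit(Z, max_iter=200, tol=1e-6, seed=0):
    """Estimate one row of the unmixing matrix from whitened data Z (signals x samples)
    with the FastICA fixed-point iteration."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(Z.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(max_iter):
        wx = w @ Z                                        # projected signal w^T x
        g, g_prime = np.tanh(wx), 1.0 - np.tanh(wx) ** 2  # g and its derivative
        w_new = (Z * g).mean(axis=1) - g_prime.mean() * w # E[x g(w^T x)] - E[g'(w^T x)] w
        w_new /= np.linalg.norm(w_new)
        if abs(abs(w_new @ w) - 1.0) < tol:               # converged (up to sign)
            return w_new
        w = w_new
    return w
```

To estimate several components, this update is typically repeated with each new vector decorrelated (deflated) against the vectors already found.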
The original sources $\boldsymbol{s}$ can be recovered by multiplying the observed signals $\boldsymbol{x}$ with the inverse of the mixing matrix, $\boldsymbol{W} = \boldsymbol{A}^{-1}$, also known as the unmixing matrix. Here it is assumed that the mixing matrix is square ($n = m$). If the number of basis vectors is greater than the dimensionality of the observed vectors, $n > m$, the task is overcomplete but is still solvable with the pseudo-inverse.
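A small sketch of the square, noiseless case (NumPy only; the specific sources and mixing matrix are illustrative): two independent non-Gaussian sources are mixed with an invertible matrix $\boldsymbol{A}$ and recovered exactly with the unmixing matrix $\boldsymbol{W} = \boldsymbol{A}^{-1}$. In practice $\boldsymbol{A}$ is unknown and $\boldsymbol{W}$ must be estimated, e.g. with a fixed-point iteration like the one sketched above.

```python
import numpy as np

rng = np.random.default_rng(0)
s = np.vstack([rng.laplace(size=1000),        # two independent non-Gaussian sources
               rng.uniform(-1, 1, size=1000)])
A = np.array([[1.0, 0.5],                     # square mixing matrix (n = m = 2)
              [0.3, 1.0]])
x = A @ s                                     # observed mixtures x = A s
W = np.linalg.inv(A)                          # unmixing matrix W = A^(-1)
s_hat = W @ x                                 # recovered sources
print(np.allclose(s, s_hat))                  # True: exact recovery when A is known
```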
With the added assumption of zero-mean and uncorrelated Gaussian noise $\boldsymbol{n} \sim N(0, \operatorname{diag}(\Sigma))$, the ICA model takes the form $\boldsymbol{x} = \boldsymbol{A} \boldsymbol{s} + \boldsymbol{n}$.
The mixing of the sources does not need to be linear. Using a nonlinear mixing function $f(\cdot \mid \theta)$ with parameters $\theta$, the nonlinear ICA model is $\boldsymbol{x} = f(\boldsymbol{s} \mid \theta) + \boldsymbol{n}$.
The independent components are identifiable up to a permutation and scaling of the sources. This identifiability requires that:
1) At most one of the sources $s_k$ is Gaussian.
2) The number of observed mixtures, $m$, is at least as large as the number of estimated components $n$ ($m \ge n$); equivalently, the mixing matrix $\boldsymbol{A}$ is of full rank, so that its inverse exists.
A special variant of ICA is binary ICA, in which both the signal sources and the monitors are binary, and the observations from the monitors are disjunctive mixtures of binary independent sources. The problem has been shown to have applications in many domains, including medical diagnosis, multi-cluster assignment, network tomography, and internet resource management.
Let $\{x_1, x_2, \ldots, x_m\}$ be the set of binary variables from $m$ monitors and $\{y_1, y_2, \ldots, y_n\}$ be the set of binary variables from $n$ sources. Source-monitor connections are represented by the (unknown) mixing matrix $\boldsymbol{G}$, where $g_{ij} = 1$ indicates that the signal from the $i$-th source can be observed by the $j$-th monitor. The system works as follows: at any time, if a source $i$ is active ($y_i = 1$) and it is connected to the monitor $j$ ($g_{ij} = 1$), then the monitor $j$ will observe some activity ($x_j = 1$). Formally we have:
$x_j = \bigvee_{i=1}^{n} (y_i \wedge g_{ij}), \quad j = 1, 2, \ldots, m,$
where $\wedge$ is Boolean AND and $\vee$ is Boolean OR. Note that noise is not explicitly modeled; rather, it can be treated as independent sources.
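A short sketch of the disjunctive (Boolean OR-of-AND) mixing just described; the source matrix, mixing matrix, and their sizes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sources, n_monitors, n_samples = 3, 4, 8

Y = rng.integers(0, 2, size=(n_sources, n_samples))    # binary source activities y_i
G = rng.integers(0, 2, size=(n_sources, n_monitors))   # binary source-monitor connections g_ij

# Monitor j is active at time t iff some connected source is active:
# x_j = OR_i (y_i AND g_ij).
X = np.zeros((n_monitors, n_samples), dtype=int)
for j in range(n_monitors):
    for t in range(n_samples):
        X[j, t] = int(np.any(Y[:, t] & G[:, j]))
print(X)
```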
The above problem can be heuristically solved [1] by assuming the variables are continuous and running FastICA on the binary observation data to get a real-valued mixing matrix $\boldsymbol{G}$, and then applying rounding techniques to $\boldsymbol{G}$ to obtain the binary values. This approach has been shown to produce highly inaccurate results.
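A hedged sketch of that heuristic, using scikit-learn's FastICA; the thresholding rule and function name are illustrative assumptions, not the procedure from the cited work, and (as noted above) the heuristic itself is known to be inaccurate:

```python
import numpy as np
from sklearn.decomposition import FastICA

def binary_ica_heuristic(X, n_sources):
    """X: binary observation matrix, shape (monitors, samples), e.g. built as in the
    previous sketch. Treat the data as continuous, run FastICA, then round."""
    ica = FastICA(n_components=n_sources, random_state=0)
    S = ica.fit_transform(X.T)      # real-valued source estimates, shape (samples, sources)
    G = ica.mixing_                 # real-valued mixing matrix estimate, shape (monitors, sources)
    # Round the real-valued estimates back to {0, 1}; a simple illustrative threshold.
    G_binary = (np.abs(G) > np.abs(G).mean()).astype(int)
    return G_binary, S
```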
Another method is to use dynamic programming: recursively breaking the observation matrix into its sub-matrices and running the inference algorithm on these sub-matrices. The key observation that leads to this algorithm is that the sub-matrix formed by the samples at which the $i$-th monitor observes no activity corresponds to an unbiased observation matrix of the hidden components that have no connection to the $i$-th monitor. Experimental results from [2] show that this approach is accurate under moderate noise levels.